Results 1 - 20 of 19,050
1.
Sci Data ; 11(1): 373, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609405

ABSTRACT

In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.


Subjects
Cataract Extraction , Cataract , Deep Learning , Video Recording , Humans , Benchmarking , Neural Networks, Computer , Cataract Extraction/methods
2.
Nat Commun ; 15(1): 3211, 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38615042

ABSTRACT

T cells have the ability to eliminate infected and cancer cells and play an essential role in cancer immunotherapy. T cell activation is elicited by the binding of the T cell receptor (TCR) to epitopes displayed on MHC molecules, and TCR specificity is determined by the sequence of its α and β chains. Here, we collect and curate a dataset of 17,715 αβTCRs interacting with dozens of class I and class II epitopes. We use this curated data to develop MixTCRpred, an epitope-specific TCR-epitope interaction predictor. MixTCRpred accurately predicts TCRs recognizing several viral and cancer epitopes. MixTCRpred further provides a useful quality control tool for multiplexed single-cell TCR sequencing assays of epitope-specific T cells and pinpoints a substantial fraction of putative contaminants in public databases. Analysis of epitope-specific dual α T cells demonstrates that MixTCRpred can identify α chains mediating epitope recognition. Applying MixTCRpred to TCR repertoires from COVID-19 patients reveals enrichment of clonotypes predicted to bind an immunodominant SARS-CoV-2 epitope. Overall, MixTCRpred provides a robust tool to predict TCRs interacting with specific epitopes and interpret TCR-sequencing data from both bulk and epitope-specific T cells.


Subjects
COVID-19 , Deep Learning , Humans , T-Lymphocytes , Epitopes , Immunodominant Epitopes
3.
BMC Med Imaging ; 24(1): 89, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622546

ABSTRACT

BACKGROUND: Accurate preoperative identification of ovarian tumour subtypes is imperative, as it enables physicians to tailor precise and individualized management strategies. We therefore developed an ultrasound (US)-based multiclass prediction algorithm for differentiating between benign, borderline, and malignant ovarian tumours. METHODS: We randomised data from 849 patients with ovarian tumours into training and testing sets at a ratio of 8:2. The regions of interest on the US images were segmented, and handcrafted radiomics features were extracted and screened. We applied the one-versus-rest method for multiclass classification. The best features were input into machine learning (ML) models to construct a radiomic signature (Rad_Sig). US images of the maximum trimmed ovarian tumour sections were input into a pre-trained convolutional neural network (CNN) model. After internal enhancement and complex algorithms, each sample's predicted probability, known as the deep transfer learning signature (DTL_Sig), was generated. Clinical baseline data were analysed. Statistically significant clinical parameters and US semantic features in the training set were used to construct clinical signatures (Clinic_Sig). The prediction results of Rad_Sig, DTL_Sig, and Clinic_Sig for each sample were fused into new feature sets to build the combined model, namely the deep learning radiomic signature (DLR_Sig). We used the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) to estimate the performance of the multiclass classification model. RESULTS: The training set included 440 benign, 44 borderline, and 196 malignant ovarian tumours. The testing set included 109 benign, 11 borderline, and 49 malignant ovarian tumours. The DLR_Sig three-class prediction model had the best overall and class-specific classification performance, with micro- and macro-average AUCs of 0.90 and 0.84, respectively, on the testing set. The class-specific AUCs were 0.84, 0.85, and 0.83 for benign, borderline, and malignant ovarian tumours, respectively. In the confusion matrix, the classifier models of Clinic_Sig and Rad_Sig could not recognise borderline ovarian tumours, whereas DLR_Sig identified the highest proportions of borderline and malignant ovarian tumours, at 54.55% and 63.27%, respectively. CONCLUSIONS: The three-class US-based DLR_Sig prediction model can discriminate between benign, borderline, and malignant ovarian tumours and may therefore guide clinicians in the differential management of patients with ovarian tumours.
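The micro- and macro-average AUCs above come from treating the three-class problem as a set of one-versus-rest binary problems. As a minimal sketch of that evaluation step (illustrative only, not the authors' code; function names are made up), the macro-average one-vs-rest AUC can be computed from per-class predicted probabilities using the Mann-Whitney formulation of AUC:

```python
def auc_binary(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive sample is scored above a random negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(y_true, y_score, classes):
    """Macro-average one-vs-rest AUC for a multiclass problem.
    y_score[i][k] is the predicted probability of class k for sample i."""
    per_class = []
    for k, c in enumerate(classes):
        binary = [1 if y == c else 0 for y in y_true]
        per_class.append(auc_binary(binary, [row[k] for row in y_score]))
    return sum(per_class) / len(per_class)
```

In practice scikit-learn's `roc_auc_score(..., multi_class="ovr", average="macro")` performs the same computation.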


Subjects
Deep Learning , Ovarian Neoplasms , Humans , Female , 60570 , Ovarian Neoplasms/diagnostic imaging , Ultrasonography , Algorithms , Retrospective Studies
4.
World J Urol ; 42(1): 238, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627315

ABSTRACT

BACKGROUND: Accurate estimation of the glomerular filtration rate (GFR) is clinically crucial for determining the status of obstruction, developing treatment strategies, and predicting prognosis in obstructive nephropathy (ON). We aimed to develop a deep learning-based system, named UroAngel, for non-invasive and convenient prediction of single-kidney function level. METHODS: We retrospectively collected computed tomography urography (CTU) images and emission computed tomography diagnostic reports of 520 ON patients. A 3D U-Net model was used to segment the renal parenchyma, and a logistic regression multi-classification model was used to predict renal function level. We compared the predictive performance of UroAngel with the Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations and with two expert radiologists in an additional 40 ON patients to validate clinical effectiveness. RESULTS: UroAngel, based on a 3D U-Net convolutional neural network, segmented the renal cortex accurately, with a Dice similarity coefficient of 0.861. Using the segmented renal cortex to predict renal function stage achieved a high accuracy of 0.918, outperforming the MDRD and CKD-EPI equations and both radiologists. CONCLUSIONS: We propose an automated 3D U-Net-based analysis system for direct prediction of single-kidney function stage from CTU images. UroAngel accurately predicted single-kidney function in ON patients, providing a novel, reliable, convenient, and non-invasive method.
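The Dice similarity coefficient reported above measures the overlap between a predicted and a reference segmentation mask. A minimal sketch of the metric (illustrative only, not the UroAngel code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1): 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are in perfect agreement.
    return 2.0 * inter / total if total else 1.0
```

A Dice score of 0.861, as reported, indicates strong but imperfect voxel-level agreement with the reference segmentation.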


Subjects
Deep Learning , Renal Insufficiency, Chronic , Solitary Kidney , Humans , Retrospective Studies , Kidney/diagnostic imaging , Renal Insufficiency, Chronic/diagnosis , Glomerular Filtration Rate , Tomography , Creatinine
5.
Front Endocrinol (Lausanne) ; 15: 1365350, 2024.
Article in English | MEDLINE | ID: mdl-38628586

ABSTRACT

Background: Thyroid-associated ophthalmopathy (TAO) is the most prevalent autoimmune orbital condition, significantly impacting patients' appearance and quality of life. Early and accurate identification of active TAO, along with timely treatment, can enhance prognosis and reduce the occurrence of severe cases. Although the Clinical Activity Score (CAS) serves as an effective assessment system for TAO, it is susceptible to assessor experience bias. This study aimed to develop an ensemble deep learning system that combines anterior segment slit-lamp photographs of patients with facial images to simulate expert assessment of TAO. Methods: The study included 156 patients with TAO who underwent detailed diagnosis and treatment at Shanxi Eye Hospital Affiliated to Shanxi Medical University from May 2020 to September 2023. Anterior segment slit-lamp photographs and facial images were used as different modalities and analyzed from multiple perspectives. Two ophthalmologists with more than 10 years of clinical experience independently determined the reference CAS for each image. An ensemble deep learning model based on the residual network was constructed under supervised learning to predict five key inflammatory signs (redness of the eyelids and conjunctiva, and swelling of the eyelids, conjunctiva, and caruncle or plica) associated with TAO, and to integrate these objective signs with two subjective symptoms (spontaneous retrobulbar pain and pain on attempted upward or downward gaze) to assess TAO activity. Results: The proposed model achieved 0.906 accuracy, 0.833 specificity, 0.906 precision, 0.906 recall, and 0.906 F1-score in active TAO diagnosis, demonstrating advanced performance in predicting CAS and TAO activity signs compared to conventional single-view unimodal approaches.
The integration of multiple views and modalities, encompassing both anterior segment slit-lamp photographs and facial images, significantly improved the prediction accuracy of the model for TAO activity and CAS. Conclusion: The ensemble multi-view multimodal deep learning system developed in this study can more accurately assess the clinical activity of TAO than traditional methods that solely rely on facial images. This innovative approach is intended to enhance the efficiency of TAO activity assessment, providing a novel means for its comprehensive, early, and precise evaluation.


Subjects
Deep Learning , Graves Ophthalmopathy , Humans , Graves Ophthalmopathy/diagnostic imaging , Quality of Life , Orbit , Pain
6.
Radiol Artif Intell ; 6(3): e240137, 2024 May.
Article in English | MEDLINE | ID: mdl-38629960
7.
Tech Coloproctol ; 28(1): 44, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38561492

ABSTRACT

BACKGROUND: Imaging is vital for assessing rectal cancer, with endoanal ultrasound (EAUS) being highly accurate in large tertiary medical centers. However, EAUS accuracy drops outside such settings, possibly due to varied examiner experience and fewer examinations. This underscores the need for an AI-based system to enhance accuracy in non-specialized centers. This study aimed to develop and validate deep learning (DL) models to differentiate rectal cancer in standard EAUS images. METHODS: A transfer learning approach with fine-tuned DL architectures was employed, utilizing a dataset of 294 images. The performance of DL models was assessed through a tenfold cross-validation. RESULTS: The DL diagnostics model exhibited a sensitivity and accuracy of 0.78 each. In the identification phase, the automatic diagnostic platform achieved an area under the curve performance of 0.85 for diagnosing rectal cancer. CONCLUSIONS: This research demonstrates the potential of DL models in enhancing rectal cancer detection during EAUS, especially in settings with lower examiner experience. The achieved sensitivity and accuracy suggest the viability of incorporating AI support for improved diagnostic outcomes in non-specialized medical centers.
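The tenfold cross-validation used here partitions the 294 images into ten disjoint folds, each serving once as the held-out test set while the rest form the training set. A minimal index-splitting sketch (illustrative only, not the study's pipeline):

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k disjoint folds; each fold serves once
    as the test set while the remaining folds form the training set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

Every sample is tested exactly once across the k iterations, which makes the most of a small dataset such as the 294 EAUS images here.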


Subjects
Deep Learning , Rectal Neoplasms , Humans , Endosonography/methods , Ultrasonography/methods , Neural Networks, Computer , Rectal Neoplasms/diagnostic imaging
8.
Int J Mol Sci ; 25(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38612943

ABSTRACT

Clear cell renal carcinoma (ccRCC), the most common subtype of renal cell carcinoma, is characterized by a highly heterogeneous and complex tumor microenvironment. Existing clinical intervention strategies, such as targeted therapy and immunotherapy, have failed to achieve good therapeutic effects. In this article, single-cell transcriptome sequencing (scRNA-seq) data from six patients, downloaded from the GEO database, were used to describe the tumor microenvironment (TME) of ccRCC, including its T cells, tumor-associated macrophages (TAMs), endothelial cells (ECs), and cancer-associated fibroblasts (CAFs). Based on the differential typing of the TME, we identified tumor cell-specific regulatory programs mediated by three key transcription factors (TFs), and the TF EPAS1/HIF-2α was identified via virtual drug screening based on our analysis of ccRCC's protein structure. A combined deep graph neural network and machine learning algorithm were then used to select anti-ccRCC compounds from bioactive compound libraries, including the FDA-approved drug library, natural product library, and human endogenous metabolite compound library. Finally, five compounds were obtained: two FDA-approved drugs (flufenamic acid and fludarabine), one endogenous metabolite, one immunology/inflammation-related compound, and one inhibitor of DNA methyltransferase (N4-methylcytidine, a cytosine nucleoside analogue that, like zebularine, inhibits DNA methyltransferase). Based on the tumor microenvironment characteristics of ccRCC, five ccRCC-specific compounds were identified, which may help guide the clinical treatment of ccRCC patients.


Subjects
Carcinoma, Renal Cell , Deep Learning , Kidney Neoplasms , Humans , Carcinoma, Renal Cell/drug therapy , Endothelial Cells , Algorithms , Single-Cell Analysis , Antimetabolites , DNA Modification Methylases , Drug Discovery , Kidney Neoplasms/drug therapy , DNA , Tumor Microenvironment
9.
Nutrients ; 16(7)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38613106

ABSTRACT

In Industry 4.0, where the automation and digitalization of entities and processes are fundamental, artificial intelligence (AI) is increasingly becoming a pivotal tool offering innovative solutions in various domains. In this context, nutrition, a critical aspect of public health, is no exception among the fields influenced by the integration of AI technology. This study aims to comprehensively investigate the current landscape of AI in nutrition, providing a deep understanding of the potential of AI, machine learning (ML), and deep learning (DL) in nutrition sciences and highlighting potential challenges and future directions. A hybrid approach combining systematic literature review (SLR) guidelines and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was adopted to systematically analyze the scientific literature from a search of major databases on artificial intelligence in nutrition sciences. A rigorous study selection was conducted using appropriate eligibility criteria, followed by a methodological quality assessment ensuring the robustness of the included studies. This review identifies several AI applications in nutrition, spanning smart and personalized nutrition, dietary assessment, food recognition and tracking, predictive modeling for disease prevention, and disease diagnosis and monitoring. The selected studies demonstrated the versatility of machine learning and deep learning techniques in handling complex relationships within nutritional datasets. This study provides a comprehensive overview of the current state of AI applications in nutrition sciences and identifies challenges and opportunities. With the rapid advancement of AI, its integration into nutrition holds significant promise for enhancing individual nutritional outcomes and optimizing dietary recommendations. Researchers, policymakers, and healthcare professionals can utilize this research to design future projects and support evidence-based decision-making in AI for nutrition and dietary guidance.


Subjects
Artificial Intelligence , Deep Learning , Humans , Machine Learning , Nutritional Status , Automation
10.
J Med Internet Res ; 26: e55794, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625718

ABSTRACT

BACKGROUND: Early detection of adverse events and their management are crucial to improving anticancer treatment outcomes, and listening to patients' subjective opinions (patients' voices) can make a major contribution to improving safety management. Recent progress in deep learning technologies has enabled various new approaches for the evaluation of safety-related events based on patient-generated text data, but few studies have focused on the improvement of real-time safety monitoring for individual patients. In addition, no study has yet been performed to validate deep learning models for screening patients' narratives for clinically important adverse event signals that require medical intervention. In our previous work, novel deep learning models were developed to detect adverse event signals for hand-foot syndrome or adverse events limiting patients' daily lives from the authored narratives of patients with cancer, aiming ultimately to use them as safety monitoring support tools for individual patients. OBJECTIVE: This study was designed to evaluate whether our deep learning models can screen clinically important adverse event signals that require intervention by health care professionals. The applicability of our deep learning models to data on patients' concerns at pharmacies was also assessed. METHODS: Pharmaceutical care records at community pharmacies were used for the evaluation of our deep learning models. The records followed the SOAP format, consisting of subjective (S), objective (O), assessment (A), and plan (P) columns. Because of the unique combination of patients' concerns in the S column and the professional records of the pharmacists, these records were considered suitable data for the present purpose. Our deep learning models were applied to the S records of patients with cancer, and the extracted adverse event signals were assessed in relation to medical actions and prescribed drugs.
RESULTS: From 30,784 S records of 2479 patients with at least 1 prescription of anticancer drugs, our deep learning models extracted true adverse event signals with more than 80% accuracy for both hand-foot syndrome (n=152, 91%) and adverse events limiting patients' daily lives (n=157, 80.1%). The deep learning models were also able to screen adverse event signals that require medical intervention by health care providers. The extracted adverse event signals could reflect the side effects of anticancer drugs used by the patients based on analysis of prescribed anticancer drugs. "Pain or numbness" (n=57, 36.3%), "fever" (n=46, 29.3%), and "nausea" (n=40, 25.5%) were common symptoms out of the true adverse event signals identified by the model for adverse events limiting patients' daily lives. CONCLUSIONS: Our deep learning models were able to screen clinically important adverse event signals that require intervention for symptoms. It was also confirmed that these deep learning models could be applied to patients' subjective information recorded in pharmaceutical care records accumulated during pharmacists' daily work.


Subjects
Antineoplastic Agents , Deep Learning , Hand-Foot Syndrome , Neoplasms , Humans , Prescriptions , Antineoplastic Agents/adverse effects , Neoplasms/drug therapy
12.
Sci Rep ; 14(1): 8504, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38605094

ABSTRACT

This work aims to investigate the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A patient cohort of 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 pairs for training and 10 for testing. As a preprocessing step, we conducted deformable image registration and Nyul intensity normalization on the MR images to maximize the similarity between MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To demonstrate clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity, by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of MRCAT images.
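The image-similarity metrics used above can be sketched as follows. This is an illustrative simplification, not the study's code: the MAE is the standard voxel-wise version, while the SSIM here is computed once globally, whereas SSIM as usually reported (e.g. by scikit-image) is averaged over local sliding windows:

```python
def mean_absolute_error_hu(ct_true, ct_synth):
    """Mean absolute error (in HU) between a reference CT and a synthetic CT,
    each given as a flat sequence of voxel intensities."""
    assert len(ct_true) == len(ct_synth)
    return sum(abs(a - b) for a, b in zip(ct_true, ct_synth)) / len(ct_true)

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Simplified global SSIM over whole images (no local windowing),
    using the standard luminance/contrast/structure formulation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images yield an SSIM of 1.0 and an MAE of 0 HU; the reported 18.5 HU MAE improvement is measured on this voxel-wise scale.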


Subjects
Deep Learning , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Feasibility Studies , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Tomography, X-Ray Computed/methods , Radiotherapy Planning, Computer-Assisted/methods
13.
Sci Data ; 11(1): 365, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38605088

ABSTRACT

Optical coherence tomography (OCT) is a non-invasive imaging technique with extensive clinical applications in ophthalmology. OCT enables the visualization of the retinal layers, playing a vital role in the early detection and monitoring of retinal diseases. OCT uses the principle of light wave interference to create detailed images of the retinal microstructures, making it a valuable tool for diagnosing ocular conditions. This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology. The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The images were acquired with an Optovue Avanti RTVue XR using raster scanning protocols with dynamic scan length and image resolution. Each retinal b-scan was acquired by centering on the fovea and interpreted and cataloged by an experienced retinal specialist. In this work, we applied Deep Learning classification techniques to this new open-access dataset.


Subjects
Deep Learning , Retina , Retinal Diseases , Tomography, Optical Coherence , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging
14.
Zhongguo Yi Liao Qi Xie Za Zhi ; 48(2): 126-131, 2024 Mar 30.
Article in Chinese | MEDLINE | ID: mdl-38605609

ABSTRACT

A deep learning-based model for automatic diagnosis and classification of adolescent idiopathic scoliosis was constructed, comprising key-point detection and Cobb angle measurement. In total, 748 full-length standing spinal X-ray images were retrospectively collected, of which 602 were used to train and validate the model and 146 to test its performance. The results showed that the model had good diagnostic and classification performance, with an accuracy of 94.5%. Compared with experts' measurements, 94.9% of its Cobb angle results were within the clinically acceptable range; the average absolute difference was 2.1°, and consistency was excellent (r² ≥ 0.9552, P < 0.001). In the future, this model could be applied clinically to improve doctors' diagnostic efficiency.
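The Cobb angle measured above is the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra, which such a model derives from detected key points. A minimal geometric sketch (illustrative only, not the paper's implementation):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle (degrees) between two endplate lines, each given as
    two (x, y) landmark points detected on the radiograph."""
    def slope_angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    a = slope_angle(*upper_endplate)
    b = slope_angle(*lower_endplate)
    # Reduce to the acute angle between the two (undirected) lines.
    deg = abs(math.degrees(a - b)) % 180.0
    return min(deg, 180.0 - deg)
```

A mean absolute difference of 2.1° against expert measurement, as reported, is within the commonly cited inter-observer variability of manual Cobb measurement.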


Subjects
Deep Learning , Scoliosis , Adolescent , Humans , Scoliosis/diagnostic imaging , Retrospective Studies , Spine , Radiography
15.
Zhongguo Yi Liao Qi Xie Za Zhi ; 48(2): 138-143, 2024 Mar 30.
Article in Chinese | MEDLINE | ID: mdl-38605611

ABSTRACT

Adrenal vein sampling is required for the staging diagnosis of primary aldosteronism, and the frames in which the adrenal veins are presented are called key frames. Currently, key frame selection relies on the doctor's visual judgement, which is time-consuming and laborious. This study proposes a key frame recognition algorithm based on deep learning. First, wavelet denoising and multi-scale vessel-enhancement filtering are used to preserve the morphological features of the adrenal veins. Then, by incorporating a self-attention mechanism, an improved recognition model, ResNet50-SA, is obtained. Compared with commonly used transfer learning models, the new model achieves 97.11% in accuracy, precision, recall, F1, and AUC, outperforming the other models, and can help clinicians quickly identify key frames of the adrenal veins.


Subjects
Deep Learning , X-Rays , Radiography
16.
Zhongguo Yi Liao Qi Xie Za Zhi ; 48(2): 144-149, 2024 Mar 30.
Article in Chinese | MEDLINE | ID: mdl-38605612

ABSTRACT

Objective: A deep learning-based method for evaluating the quality of pediatric pelvic X-ray images is proposed, and a diagnostic model is constructed and verified for clinical feasibility. Methods: Anteroposterior pelvic radiographs of 3,247 children were retrospectively collected and randomly divided into training, validation, and test datasets. An artificial intelligence model was constructed and its reliability as a quality-control model was evaluated. Results: The diagnostic accuracy, area under the ROC curve, sensitivity, and specificity of the model were 99.4%, 0.993, 98.6%, and 100.0%, respectively. The 95% consistency limits of the model's pelvic tilt index were -0.052 to 0.072, and those of the pelvic rotation index were -0.088 to 0.055. Conclusion: This is the first attempt to apply an AI algorithm to the quality assessment of children's pelvic radiographs, and it has significantly improved the diagnosis and treatment of DDH in children.
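The 95% consistency limits reported above correspond to Bland-Altman 95% limits of agreement: the mean difference between two sets of measurements ± 1.96 times the standard deviation of the differences. A minimal sketch (illustrative names, not the study's code):

```python
import math

def limits_of_agreement(measure_a, measure_b):
    """Bland-Altman 95% limits of agreement between two paired sets of
    measurements: mean difference ± 1.96 × SD of the differences."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```

Narrow limits such as -0.052 to 0.072 for the pelvic tilt index indicate close agreement between the model's index values and the reference measurements.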


Subjects
Artificial Intelligence , Deep Learning , Humans , Child , Retrospective Studies , Reproducibility of Results , X-Rays
17.
Front Endocrinol (Lausanne) ; 15: 1370838, 2024.
Article in English | MEDLINE | ID: mdl-38606087

ABSTRACT

Purpose: To develop and validate a deep learning radiomics (DLR) model that uses X-ray images to predict the classification of osteoporotic vertebral fractures (OVFs). Material and methods: The study encompassed a cohort of 942 patients, involving examinations of 1076 vertebrae through X-ray, CT, and MRI across three distinct hospitals. The OVFs were categorized as class 0, 1, or 2 based on the Assessment System of Thoracolumbar Osteoporotic Fracture. The dataset was divided randomly into four distinct subsets: a training set comprising 712 samples, an internal validation set with 178 samples, an external validation set containing 111 samples, and a prospective validation set consisting of 75 samples. The ResNet-50 architectural model was used to implement deep transfer learning (DTL), undergoing pre-training separately on the RadImageNet and ImageNet datasets. Features from DTL and radiomics were extracted and integrated using X-ray images. The optimal fusion feature model was identified through least absolute shrinkage and selection operator logistic regression. Evaluation of the predictive capabilities for OVFs classification involved eight machine learning models, assessed through receiver operating characteristic curves employing the "One-vs-Rest" strategy. The Delong test was applied to compare the predictive performance of the superior RadImageNet model against the ImageNet model. Results: Following pre-training separately on RadImageNet and ImageNet datasets, feature selection and fusion yielded 17 and 12 fusion features, respectively. Logistic regression emerged as the optimal machine learning algorithm for both DLR models. Across the training set, internal validation set, external validation set, and prospective validation set, the macro-average Area Under the Curve (AUC) based on the RadImageNet dataset surpassed those based on the ImageNet dataset, with statistically significant differences observed (P<0.05).
Utilizing the binary "One-vs-Rest" strategy, the model based on the RadImageNet dataset demonstrated superior efficacy in predicting Class 0, achieving an AUC of 0.969 and accuracy of 0.863. Predicting Class 1 yielded an AUC of 0.945 and accuracy of 0.875, while for Class 2, the AUC and accuracy were 0.809 and 0.692, respectively. Conclusion: The DLR model, based on the RadImageNet dataset, outperformed the ImageNet model in predicting the classification of OVFs, with generalizability confirmed in the prospective validation set.


Subjects
Deep Learning , Osteoporotic Fractures , Spinal Fractures , Humans , Osteoporotic Fractures/diagnostic imaging , 60570 , X-Rays , Spine , Spinal Fractures/diagnostic imaging
18.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610238

ABSTRACT

The potential of microwave Doppler radar in non-contact vital sign detection is significant; however, prevailing radar-based heart rate (HR) and heart rate variability (HRV) monitoring technologies often necessitate data lengths surpassing 10 s, leading to increased detection latency and inaccurate HRV estimates. To address this problem, this paper introduces a novel network integrating a frequency representation module and a residual in residual module for the precise estimation and tracking of HR from concise time series, followed by HRV monitoring. The network adeptly transforms radar signals from the time domain to the frequency domain, yielding high-resolution spectrum representation within specified frequency intervals. This significantly reduces latency and improves HRV estimation accuracy by using data that are only 4 s in length. This study uses simulation data, Frequency-Modulated Continuous-Wave radar-measured data, and Continuous-Wave radar data to validate the model. Experimental results show that despite the shortened data length, the average heart rate measurement accuracy of the algorithm remains above 95% with no loss of estimation accuracy. This study contributes an efficient heart rate variability estimation algorithm to the domain of non-contact vital sign detection, offering significant practical application value.
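The core of such frequency-domain HR estimation is locating the dominant spectral peak within a physiological band from a short window of samples. A minimal DFT-based sketch (illustrative only; the paper's network learns a high-resolution spectrum representation rather than using a plain DFT):

```python
import cmath
import math

def dominant_bpm(samples, fs, f_lo=0.8, f_hi=3.0):
    """Estimate heart rate (beats/min) as the dominant DFT frequency within
    a physiological band (0.8-3.0 Hz ≈ 48-180 bpm), from a short time
    series sampled at fs Hz. Assumes the band contains at least one bin."""
    n = len(samples)
    best_bin, best_mag = None, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            x = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
            if abs(x) > best_mag:
                best_bin, best_mag = k, abs(x)
    return 60.0 * best_bin * fs / n
```

Note the trade-off the paper targets: with only 4 s of data the plain DFT's frequency resolution is 0.25 Hz (15 bpm), which is why a learned high-resolution spectrum representation can improve short-window HRV estimates.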


Subjects
Deep Learning , Heart Rate , Radar , Heart Rate Determination , Algorithms
19.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610258

ABSTRACT

In this paper, we propose an amount estimation method for food intake based on both color and depth images. Two pairs of color and depth images are captured pre- and post-meals. The pre- and post-meal color images are employed to detect food types and food existence regions using Mask R-CNN. The post-meal color image is spatially transformed to match the food region locations between the pre- and post-meal color images. The same transformation is also performed on the post-meal depth image. The pixel values of the post-meal depth image are compensated to reflect 3D position changes caused by the image transformation. In both the pre- and post-meal depth images, a space volume for each food region is calculated by dividing the space between the food surfaces and the camera into multiple tetrahedra. The food intake amounts are estimated as the difference in space volumes calculated from the pre- and post-meal depth images. From the simulation results, we verify that the proposed method estimates the food intake amount with an error of up to 2.2%.
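The volume computation described above sums the volumes of tetrahedra spanning the space between the food surface and the camera. The building block is the volume of a single tetrahedron from four 3-D points, via the scalar triple product. A minimal sketch of that step (illustrative only, not the authors' code):

```python
def tetrahedron_volume(p0, p1, p2, p3):
    """Volume of a tetrahedron given four 3-D points, computed as
    |det[p1-p0, p2-p0, p3-p0]| / 6 (scalar triple product)."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return abs(det) / 6.0
```

Summing such volumes over the pre- and post-meal depth maps and taking the difference yields the intake estimate described in the abstract.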


Subjects
Deep Learning , Computer Simulation , Food , Postprandial Period , Eating
20.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610410

ABSTRACT

Frameworks for human activity recognition (HAR) can be applied in the clinical environment for monitoring patients' motor and functional abilities, either remotely or within a rehabilitation program. Deep Learning (DL) models can be exploited to perform HAR on raw data, thus avoiding time-consuming feature engineering operations. Most works targeting HAR with DL-based architectures have tested workflow performance on data from separate executions of the tasks; few frameworks in the literature aim at recognizing continuously executed motor actions. In this article, the authors present the design, development, and testing of a DL-based workflow targeting continuous human activity recognition (CHAR). The model was trained on data recorded from ten healthy subjects and tested on eight different subjects. Despite the limited sample size, the authors claim the capability of the proposed framework to accurately classify motor actions within a feasible time, making it potentially useful in a clinical scenario.


Subjects
Deep Learning , Humans , Human Activities , Activities of Daily Living , Engineering , Healthy Volunteers